
    A Tutorial Introduction to Mosaic Pascal

    In this report we describe a Pascal system that has been developed for programming Mosaic multicomputers. The system runs on our Sun workstations, and we assume some familiarity with their use. We also assume that the reader is familiar with programming in Pascal and with message-passing programs. We describe how the Pascal language has been extended to perform message passing. We discuss a few implementation aspects that are relevant only to those users who have a need (or desire) to control some machine-specific aspects; the latter requires some detailed knowledge of the Mosaic system.

    Weakest Preconditions for Progress

    Predicate transformers that map the postcondition and all intermediate conditions of a command to a precondition are introduced. They can be used to specify certain progress properties of sequential programs.
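    For orientation, the classical Dijkstra-style weakest-precondition rules that such transformers generalize can be written as below; the progress transformers introduced in the report additionally take the intermediate conditions of a command into account, which these standard rules do not capture.

```latex
% Classical weakest-precondition rules (Dijkstra-style); the report's
% progress transformers generalize these to intermediate conditions.
\begin{align*}
  wp(\mathit{skip},\, Q) &\;\equiv\; Q \\
  wp(x := E,\, Q)        &\;\equiv\; Q[x := E] \\
  wp(S_1 ; S_2,\, Q)     &\;\equiv\; wp(S_1,\, wp(S_2,\, Q)) \\
  wp(\mathbf{if}\ B\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\, Q)
      &\;\equiv\; \bigl(B \Rightarrow wp(S_1, Q)\bigr) \wedge \bigl(\neg B \Rightarrow wp(S_2, Q)\bigr)
\end{align*}
```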

    Parallel Program Design and Generalized Weakest Preconditions

    No abstract available

    Steady-State Properties of Single-File Systems with Conversion

    We have used Monte-Carlo methods and analytical techniques to investigate the influence of the characteristic parameters, such as pipe length, diffusion, adsorption, desorption and reaction rate constants, on the steady-state properties of Single-File Systems with a reaction. We looked at cases in which all the sites are reactive and cases in which only some of them are reactive. Comparisons between Mean-Field predictions and Monte-Carlo simulations for the occupancy profiles and reactivity are made. Substantial differences between Mean-Field and the simulations are found when rates of diffusion are high. Mean-Field results can only mimic Single-File behavior by changing the diffusion rate constant, but they effectively allow passing of particles. Reactivity converges to a limit value if more reactive sites are added: sites in the middle of the system have little or no effect on the kinetics. Occupancy profiles show approximately exponential behavior from the ends to the middle of the system. Comment: 15 pages, 20 figures.
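    A minimal, illustrative lattice simulation of such a single-file pore with conversion is sketched below; the update scheme, rate constants, and end-site adsorption/desorption rule are assumptions for illustration, not the authors' Dynamic Monte Carlo code.

```python
# Minimal illustrative lattice simulation of a single-file pore with
# conversion A -> B; rate constants and the update scheme are assumptions.
import random

EMPTY, A, B = 0, 1, 2

def simulate(n_sites=20, reactive=None, k_ads=1.0, k_des=1.0,
             k_diff=1.0, k_rxn=0.1, sweeps=10000, seed=0):
    """Return the time-averaged occupancy profile and the number of A->B conversions."""
    rng = random.Random(seed)
    if reactive is None:
        reactive = set(range(n_sites))          # all sites reactive
    lattice = [EMPTY] * n_sites
    k_max = max(k_ads, k_des, k_diff, k_rxn)    # acceptance normalisation
    occupancy = [0.0] * n_sites
    conversions = 0

    for _ in range(sweeps):
        for _ in range(n_sites):
            i = rng.randrange(n_sites)
            move = rng.choice(("exchange", "hop", "react"))
            if move == "exchange" and i in (0, n_sites - 1):
                # adsorption of A / desorption at the pore ends only
                if lattice[i] == EMPTY and rng.random() < k_ads / k_max:
                    lattice[i] = A
                elif lattice[i] != EMPTY and rng.random() < k_des / k_max:
                    lattice[i] = EMPTY
            elif move == "hop":
                j = i + rng.choice((-1, 1))
                if 0 <= j < n_sites and lattice[i] != EMPTY \
                        and lattice[j] == EMPTY and rng.random() < k_diff / k_max:
                    lattice[j], lattice[i] = lattice[i], EMPTY   # no passing
            elif move == "react":
                if lattice[i] == A and i in reactive and rng.random() < k_rxn / k_max:
                    lattice[i] = B
                    conversions += 1
        for i, s in enumerate(lattice):          # accumulate occupancy profile
            occupancy[i] += (s != EMPTY)

    return [o / sweeps for o in occupancy], conversions

if __name__ == "__main__":
    profile, n_conv = simulate()
    print("occupancy profile:", [round(p, 2) for p in profile])
    print("A->B conversions :", n_conv)
```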

    A Distributed Implementation of a Task Pool

    In this paper we present a distributed algorithm to implement a task pool. The algorithm can be used to implement a processor farm, i.e., a collection of processes that consume tasks from the task pool and possibly produce tasks into it. There are no restrictions on which process consumes which task, nor on the order in which tasks are processed. The algorithm takes care of the distribution of the tasks over the processes and ensures load balancing. We derive the algorithm by transforming a sequential algorithm into a distributed one; the transformation is guided by the distribution of the data over the processes. First we discuss the case of two processes, and then the general case of one or more processes.
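    The sketch below illustrates the task-pool interface with a shared-memory thread pool in which workers both consume and produce tasks; it is only an illustration of the idea, not the paper's distributed, message-passing algorithm with load balancing. The worker function, task representation, and termination protocol are assumptions.

```python
# Minimal shared-memory sketch of a processor farm: workers consume tasks
# from a pool and may produce new tasks into it.
import queue
import threading

def worker(pool, results, lock):
    while True:
        task = pool.get()
        if task is None:                 # sentinel: no more work
            pool.task_done()
            break
        n = task
        with lock:
            results.append(n * n)        # "process" the task
        if n > 1:                        # a task may spawn a further task
            pool.put(n - 1)
        pool.task_done()

def run_farm(initial_tasks, n_workers=4):
    pool, results, lock = queue.Queue(), [], threading.Lock()
    for t in initial_tasks:
        pool.put(t)
    threads = [threading.Thread(target=worker, args=(pool, results, lock))
               for _ in range(n_workers)]
    for th in threads:
        th.start()
    pool.join()                          # wait until every task is done
    for _ in threads:                    # then release the workers
        pool.put(None)
    for th in threads:
        th.join()
    return results

if __name__ == "__main__":
    print(sorted(run_farm([3, 5])))      # squares of 1..3 and 1..5
```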

    Exact results for the reactivity of a single-file system

    We derive analytical expressions for the reactivity of a Single-File System with fast diffusion and with adsorption and desorption at one end. If the conversion reaction is fast, the reactivity depends only very weakly on the system size, and the conversion is about 100%. If the reaction is slow, the reactivity becomes proportional to the system size, the loading, and the reaction rate constant. As the system size increases, the reactivity approaches the geometric mean of the reaction rate constant and the rate of adsorption and desorption. For large systems the number of nonconverted particles decreases exponentially with distance from the adsorption/desorption end. Comment: 4 pages, 2 figures.
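    Schematically, the two limits described above can be written as follows; the symbols are assumptions introduced only for this restatement: S for the number of sites, θ for the loading, k_rxn for the reaction rate constant, and W for the adsorption/desorption rate at the open end.

```latex
% Schematic restatement of the limits described in the abstract.
% Assumed symbols: S = number of sites, \theta = loading,
% k_rxn = reaction rate constant, W = adsorption/desorption rate at the open end.
\begin{align*}
  R &\;\propto\; S\,\theta\,k_{\mathrm{rxn}}
      && \text{(slow reaction)} \\
  R &\;\longrightarrow\; \sqrt{k_{\mathrm{rxn}}\,W}
      && \text{(as the system size grows)}
\end{align*}
```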

    Transient behavior in Single-File Systems

    We have used Monte-Carlo methods and analytical techniques to investigate the influence of characteristic parameters, such as pipe length, diffusion, adsorption, desorption and reaction rates, on the transient properties of Single-File Systems. The transient, or relaxation, regime is the period in which the system evolves to equilibrium. We have studied the system when all the sites are reactive and when only some of them are reactive. Comparisons between Mean-Field predictions, Cluster Approximation predictions, and Monte-Carlo simulations for the relaxation time of the system are shown. We outline the cases in which Mean-Field analysis gives good results compared to Dynamic Monte-Carlo results. For some specific cases we can derive the relaxation time analytically. Occupancy profiles for different distributions of the sites are compared for both Mean-Field and simulations. Different results for slow and fast reaction systems and for different distributions of reactive sites are discussed. Comment: 18 pages, 19 figures.
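    As a small illustration of how a relaxation time can be read off such transients, the sketch below fits an exponential approach to steady state to a loading-versus-time trace; the synthetic data, model form, and fitting routine are assumptions standing in for actual Dynamic Monte Carlo output.

```python
# Illustrative extraction of a relaxation time from a loading-vs-time trace
# by fitting an exponential approach to steady state.  Synthetic data stand
# in for simulation output; this is not the paper's analysis code.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, theta_eq, tau):
    """Exponential approach of the loading theta(t) to its equilibrium value."""
    return theta_eq * (1.0 - np.exp(-t / tau))

# synthetic "simulation" trace: true theta_eq = 0.6, true tau = 50 (arbitrary units)
t = np.linspace(0.0, 400.0, 200)
rng = np.random.default_rng(1)
theta = relaxation(t, 0.6, 50.0) + rng.normal(0.0, 0.01, t.size)

(theta_eq_fit, tau_fit), _ = curve_fit(relaxation, t, theta, p0=(0.5, 30.0))
print(f"fitted equilibrium loading: {theta_eq_fit:.3f}")
print(f"fitted relaxation time    : {tau_fit:.1f}")
```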

    Systematic reduction of Hyperspectral Images for high-throughput Plastic Characterization

    Hyperspectral Imaging (HSI) combines microscopy and spectroscopy to assess the spatial distribution of spectroscopically active compounds in objects, and has diverse applications in food quality control, pharmaceutical processes, and waste sorting. However, due to the large size of HSI datasets, it can be challenging to analyze and store them within a reasonable digital infrastructure, especially in waste sorting, where speed and data storage resources are limited. Additionally, as with most spectroscopic data, there is significant redundancy, making pixel and variable selection crucial for retaining chemical information. Recent developments in chemometrics enable automated and evidence-based data reduction, which can substantially enhance the speed and performance of Non-Negative Matrix Factorization (NMF), a widely used algorithm for chemical resolution of HSI data. By recovering the pure contribution maps and spectral profiles of distributed compounds, NMF can provide evidence-based sorting decisions for efficient waste management. To improve the quality and efficiency of data analysis on HSI data, we apply a convex-hull method to select essential pixels and wavelengths and remove uninformative and redundant information. This process minimizes computational strain and effectively eliminates highly mixed pixels. By reducing data redundancy, data investigation and analysis become more straightforward, as demonstrated in both simulated and real HSI data for plastic sorting.
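    A minimal sketch of this kind of pixel reduction is shown below, under the following assumptions: normalised pixels are projected onto a low-dimensional PCA score space, the vertices of their convex hull are kept as essential pixels, and NMF is then fitted on that reduced set. The synthetic data, hull dimension, and NMF settings are illustrative, not the authors' pipeline.

```python
# Minimal sketch of convex-hull based pixel reduction before NMF on an
# unfolded hyperspectral cube.  All settings here are illustrative assumptions.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)

# synthetic HSI cube: 50 x 50 pixels, 120 wavelengths, 3 mixed "compounds"
pure_spectra = np.abs(rng.normal(size=(3, 120)))          # spectral profiles
abundances   = rng.dirichlet(np.ones(3), size=50 * 50)    # per-pixel mixtures
cube = abundances @ pure_spectra + 0.01 * np.abs(rng.normal(size=(2500, 120)))

# 1) project normalised pixels onto a low-dimensional score space
norm_pixels = cube / cube.sum(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(norm_pixels)

# 2) keep only the convex-hull vertices: the "essential" (least mixed) pixels
essential = ConvexHull(scores).vertices
print(f"kept {essential.size} essential pixels out of {cube.shape[0]}")

# 3) resolve spectral profiles from the reduced pixel set with NMF
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
nmf.fit(cube[essential])
contributions = nmf.transform(cube)      # contribution maps for all pixels
print("recovered spectral profiles:", nmf.components_.shape)
print("contribution maps          :", contributions.shape)
```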

    Power-managed smart lighting using a semantic interoperability architecture

    This paper presents a power-managed smart lighting system that allows collaboration of lighting consumer electronics (CE) devices and the corresponding system architectures provided by different CE suppliers. In the example scenario, the rooms of a building are categorized as low and high priority, each category utilizing a different system architecture. The rooms collaborate through a semantic interoperability platform. The overall smart lighting system conforms to a power quota regime and maintains a target power consumption level by automatically adjusting the lights in the building.
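    A toy version of such a quota regime is sketched below: low-priority rooms are dimmed first until the total lighting power falls under a target. The room model, priorities, and dimming policy are assumptions for the sketch; the paper's semantic interoperability platform and the different device architectures are not modelled.

```python
# Illustrative power-quota controller: dim low-priority rooms first so that
# total lighting power stays at or below a target.
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    priority: int        # 0 = low priority, 1 = high priority
    max_power: float     # watts at full brightness
    level: float = 1.0   # dimming level in [0, 1]

    @property
    def power(self) -> float:
        return self.max_power * self.level

def enforce_quota(rooms, quota, step=0.05):
    """Reduce dimming levels, low-priority rooms first, until under the quota."""
    for priority in (0, 1):                      # dim low priority before high
        while sum(r.power for r in rooms) > quota:
            candidates = [r for r in rooms if r.priority == priority and r.level > 0]
            if not candidates:
                break                            # nothing left to dim at this priority
            for r in candidates:
                r.level = max(0.0, r.level - step)

rooms = [Room("corridor", 0, 200.0), Room("storage", 0, 150.0),
         Room("office", 1, 300.0), Room("lab", 1, 250.0)]
enforce_quota(rooms, quota=600.0)
for r in rooms:
    print(f"{r.name:8s} level={r.level:.2f} power={r.power:6.1f} W")
print("total:", round(sum(r.power for r in rooms), 1), "W")
```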

    Runtime evaluation of cognitive systems for non-deterministic multiple output classification problems

    Cognitive applications that involve complex decision making, such as smart lighting, have non-deterministic input-output relationships, i.e., more than one output may be acceptable for a given input. We refer to them as non-deterministic multiple output classification (nDMOC) problems, for which it is particularly difficult for machine learning (ML) algorithms to predict outcomes accurately. Evaluating ML algorithms based on commonly used metrics such as Classification Accuracy (CA) is not appropriate. In a batch setting, Relevance Score (RS) was proposed as a better alternative; it determines how relevant a predicted output is to a given context. We introduce two variants of RS to evaluate ML algorithms in an online setting. Furthermore, we evaluate the algorithms using different metrics for two datasets that have non-deterministic input-output relationships. We show that instance-based learning provides superior RS performance and that the RS performance keeps improving with an increase in the number of observed samples, even after the CA performance has converged to its maximum. This is a crucial result, as it illustrates that RS is able to capture the performance of ML algorithms in the context of nDMOC problems while CA cannot.
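    The contrast between CA and a relevance-style measure can be illustrated with a simplified, set-based stand-in: CA compares each prediction to the single recorded label, while the set-based measure accepts any output that is acceptable in the given context. The acceptable sets and scoring rule below are illustrative assumptions; the paper's Relevance Score is defined more generally.

```python
# Simplified contrast between classification accuracy (CA) and a set-based
# relevance measure for nDMOC-style data, where each context admits several
# acceptable outputs but only one was recorded as "the" label.
def classification_accuracy(predictions, recorded_labels):
    hits = sum(p == y for p, y in zip(predictions, recorded_labels))
    return hits / len(predictions)

def set_relevance(predictions, acceptable_sets):
    hits = sum(p in acceptable for p, acceptable in zip(predictions, acceptable_sets))
    return hits / len(predictions)

recorded   = ["dim", "bright", "off", "dim"]
acceptable = [{"dim", "medium"}, {"bright"}, {"off", "dim"}, {"dim", "medium"}]
predicted  = ["medium", "bright", "dim", "dim"]

print("CA       :", classification_accuracy(predicted, recorded))   # 0.5
print("relevance:", set_relevance(predicted, acceptable))           # 1.0
```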